
feat(cloudnative-pg): update helm-release to v0.19.0 #2035

Merged: 1 commit merged into main from renovate/cloudnative-pg-0.x on Oct 19, 2023

Conversation

tyriis-automation[bot] (Contributor):

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| cloudnative-pg (source) | minor | `0.18.2` -> `0.19.0` |

Release Notes

cloudnative-pg/charts (cloudnative-pg)

v0.19.0

Compare Source


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

tyriis-automation[bot] added the renovate/flux (renovate flux manager), renovate/helm (renovate helm datasource), and type/minor (a minor update) labels on Oct 19, 2023.
tyriis-automation[bot] (Contributor, Author):

--- kubernetes HelmRelease: database/cloudnative-pg Deployment: database/cloudnative-pg

+++ kubernetes HelmRelease: database/cloudnative-pg Deployment: database/cloudnative-pg

@@ -27,20 +27,20 @@

         - --secret-name=cnpg-controller-manager-config
         - --webhook-port=9443
         command:
         - /manager
         env:
         - name: OPERATOR_IMAGE_NAME
-          value: ghcr.io/cloudnative-pg/cloudnative-pg:1.20.2
+          value: ghcr.io/cloudnative-pg/cloudnative-pg:1.21.0
         - name: OPERATOR_NAMESPACE
           valueFrom:
             fieldRef:
               fieldPath: metadata.namespace
         - name: MONITORING_QUERIES_CONFIGMAP
           value: cnpg-default-monitoring
-        image: ghcr.io/cloudnative-pg/cloudnative-pg:1.20.2
+        image: ghcr.io/cloudnative-pg/cloudnative-pg:1.21.0
         imagePullPolicy: IfNotPresent
         livenessProbe:
           httpGet:
             path: /readyz
             port: 9443
             scheme: HTTPS
@@ -65,12 +65,14 @@

           capabilities:
             drop:
             - ALL
           readOnlyRootFilesystem: true
           runAsGroup: 10001
           runAsUser: 10001
+          seccompProfile:
+            type: RuntimeDefault
         volumeMounts:
         - mountPath: /controller
           name: scratch-data
         - mountPath: /run/secrets/cnpg.io/webhook
           name: webhook-certificates
       securityContext:
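With this release the manager container also pins a seccomp profile. Under the changed hunk above, the rendered container-level securityContext looks roughly like the sketch below (illustrative only, assembled from the diff context; field order and surrounding fields may differ in actual chart output):

```yaml
# Sketch of the container securityContext rendered by chart v0.19.0,
# reconstructed from the diff context above; not verbatim chart output.
securityContext:
  capabilities:
    drop:
    - ALL
  readOnlyRootFilesystem: true
  runAsGroup: 10001
  runAsUser: 10001
  seccompProfile:
    type: RuntimeDefault  # new in 0.19.0: opts in to the container runtime's default seccomp filter
```

RuntimeDefault is the standard Kubernetes way to enable the runtime's built-in syscall filter without shipping a custom profile.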
--- kubernetes HelmRelease: database/cloudnative-pg MutatingWebhookConfiguration: database/cnpg-mutating-webhook-configuration

+++ kubernetes HelmRelease: database/cloudnative-pg MutatingWebhookConfiguration: database/cnpg-mutating-webhook-configuration

@@ -14,13 +14,13 @@

     service:
       name: cnpg-webhook-service
       namespace: database
       path: /mutate-postgresql-cnpg-io-v1-backup
       port: 443
   failurePolicy: Fail
-  name: mbackup.kb.io
+  name: mbackup.cnpg.io
   rules:
   - apiGroups:
     - postgresql.cnpg.io
     apiVersions:
     - v1
     operations:
@@ -35,13 +35,13 @@

     service:
       name: cnpg-webhook-service
       namespace: database
       path: /mutate-postgresql-cnpg-io-v1-cluster
       port: 443
   failurePolicy: Fail
-  name: mcluster.kb.io
+  name: mcluster.cnpg.io
   rules:
   - apiGroups:
     - postgresql.cnpg.io
     apiVersions:
     - v1
     operations:
@@ -56,13 +56,13 @@

     service:
       name: cnpg-webhook-service
       namespace: database
       path: /mutate-postgresql-cnpg-io-v1-scheduledbackup
       port: 443
   failurePolicy: Fail
-  name: mscheduledbackup.kb.io
+  name: mscheduledbackup.cnpg.io
   rules:
   - apiGroups:
     - postgresql.cnpg.io
     apiVersions:
     - v1
     operations:
--- kubernetes HelmRelease: database/cloudnative-pg ClusterRole: database/cloudnative-pg-edit

+++ kubernetes HelmRelease: database/cloudnative-pg ClusterRole: database/cloudnative-pg-edit

@@ -0,0 +1,24 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: cloudnative-pg-edit
+  labels:
+    app.kubernetes.io/name: cloudnative-pg
+    app.kubernetes.io/instance: cloudnative-pg
+    app.kubernetes.io/managed-by: Helm
+rules:
+- apiGroups:
+  - postgresql.cnpg.io
+  resources:
+  - backups
+  - clusters
+  - poolers
+  - scheduledbackups
+  verbs:
+  - create
+  - delete
+  - deletecollection
+  - patch
+  - update
+
--- kubernetes HelmRelease: database/cloudnative-pg ClusterRole: database/cloudnative-pg

+++ kubernetes HelmRelease: database/cloudnative-pg ClusterRole: database/cloudnative-pg

@@ -326,7 +326,17 @@

   - create
   - get
   - list
   - patch
   - update
   - watch
+- apiGroups:
+  - snapshot.storage.k8s.io
+  resources:
+  - volumesnapshots
+  verbs:
+  - create
+  - get
+  - list
+  - patch
+  - watch
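The added rule grants the operator access to the CSI snapshot API. For orientation, a VolumeSnapshot object of the kind these verbs now cover looks like the sketch below (all names are hypothetical placeholders; in practice the operator creates such objects itself):

```yaml
# Illustrative VolumeSnapshot covered by the added snapshot.storage.k8s.io rule.
# Names here are placeholders, not values used by the operator.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pg-backup-snap
  namespace: database
spec:
  source:
    persistentVolumeClaimName: cnpg-data-pvc  # PVC backing a PostgreSQL instance
```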
 
--- kubernetes HelmRelease: database/cloudnative-pg ConfigMap: database/cnpg-default-monitoring

+++ kubernetes HelmRelease: database/cloudnative-pg ConfigMap: database/cnpg-default-monitoring

@@ -107,20 +107,22 @@

       metrics:
         - start_time:
             usage: "GAUGE"
             description: "Time at which postgres started (based on epoch)"
 
     pg_replication:
-      query: "SELECT CASE WHEN NOT pg_catalog.pg_is_in_recovery()
+      query: "SELECT CASE WHEN (
+                NOT pg_catalog.pg_is_in_recovery()
+                OR pg_catalog.pg_last_wal_receive_lsn() = pg_catalog.pg_last_wal_replay_lsn())
               THEN 0
               ELSE GREATEST (0,
                 EXTRACT(EPOCH FROM (now() - pg_catalog.pg_last_xact_replay_timestamp())))
               END AS lag,
               pg_catalog.pg_is_in_recovery() AS in_recovery,
               EXISTS (TABLE pg_stat_wal_receiver) AS is_wal_receiver_up,
-              (SELECT count(*) FROM pg_stat_replication) AS streaming_replicas"
+              (SELECT count(*) FROM pg_catalog.pg_stat_replication) AS streaming_replicas"
       metrics:
         - lag:
             usage: "GAUGE"
             description: "Replication lag behind primary in seconds"
         - in_recovery:
             usage: "GAUGE"
@@ -135,13 +137,16 @@

     pg_replication_slots:
       query: |
         SELECT slot_name,
           slot_type,
           database,
           active,
-          pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), restart_lsn)
+          (CASE pg_catalog.pg_is_in_recovery()
+            WHEN TRUE THEN pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_receive_lsn(), restart_lsn)
+            ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), restart_lsn)
+          END) as pg_wal_lsn_diff
         FROM pg_catalog.pg_replication_slots
         WHERE NOT temporary
       metrics:
         - slot_name:
             usage: "LABEL"
             description: "Name of the replication slot"
@@ -316,12 +321,13 @@

     pg_stat_replication:
       primary: true
       query: |
        SELECT usename
          , COALESCE(application_name, '') AS application_name
          , COALESCE(client_addr::text, '') AS client_addr
+         , COALESCE(client_port::text, '') AS client_port
          , EXTRACT(EPOCH FROM backend_start) AS backend_start
          , COALESCE(pg_catalog.age(backend_xmin), 0) AS backend_xmin_age
          , pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), sent_lsn) AS sent_diff_bytes
          , pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), write_lsn) AS write_diff_bytes
          , pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), flush_lsn) AS flush_diff_bytes
          , COALESCE(pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), replay_lsn),0) AS replay_diff_bytes
@@ -336,12 +342,15 @@

         - application_name:
             usage: "LABEL"
             description: "Name of the application"
         - client_addr:
             usage: "LABEL"
             description: "Client IP address"
+        - client_port:
+            usage: "LABEL"
+            description: "Client TCP port"
         - backend_start:
             usage: "COUNTER"
             description: "Time when this process was started"
         - backend_xmin_age:
             usage: "COUNTER"
             description: "The age of this standby's xmin horizon"
--- kubernetes HelmRelease: database/cloudnative-pg ClusterRole: database/cloudnative-pg-view

+++ kubernetes HelmRelease: database/cloudnative-pg ClusterRole: database/cloudnative-pg-view

@@ -0,0 +1,22 @@

+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: cloudnative-pg-view
+  labels:
+    app.kubernetes.io/name: cloudnative-pg
+    app.kubernetes.io/instance: cloudnative-pg
+    app.kubernetes.io/managed-by: Helm
+rules:
+- apiGroups:
+  - postgresql.cnpg.io
+  resources:
+  - backups
+  - clusters
+  - poolers
+  - scheduledbackups
+  verbs:
+  - get
+  - list
+  - watch
+
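The new cloudnative-pg-view ClusterRole is intended to be bound to read-only consumers of CNPG resources. A minimal binding could look like this (a hedged sketch; the binding name and the subject group `dba-readonly` are hypothetical):

```yaml
# Hypothetical binding granting read-only access to CNPG custom resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cnpg-view-dba  # placeholder name
subjects:
- kind: Group
  name: dba-readonly  # hypothetical group; substitute your own subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cloudnative-pg-view  # ClusterRole added by chart v0.19.0
  apiGroup: rbac.authorization.k8s.io
```

The same pattern applies to cloudnative-pg-edit for subjects that should manage these resources.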
--- kubernetes HelmRelease: database/cloudnative-pg ValidatingWebhookConfiguration: database/cnpg-validating-webhook-configuration

+++ kubernetes HelmRelease: database/cloudnative-pg ValidatingWebhookConfiguration: database/cnpg-validating-webhook-configuration

@@ -14,13 +14,13 @@

     service:
       name: cnpg-webhook-service
       namespace: database
       path: /validate-postgresql-cnpg-io-v1-backup
       port: 443
   failurePolicy: Fail
-  name: vbackup.kb.io
+  name: vbackup.cnpg.io
   rules:
   - apiGroups:
     - postgresql.cnpg.io
     apiVersions:
     - v1
     operations:
@@ -35,13 +35,13 @@

     service:
       name: cnpg-webhook-service
       namespace: database
       path: /validate-postgresql-cnpg-io-v1-cluster
       port: 443
   failurePolicy: Fail
-  name: vcluster.kb.io
+  name: vcluster.cnpg.io
   rules:
   - apiGroups:
     - postgresql.cnpg.io
     apiVersions:
     - v1
     operations:
@@ -56,13 +56,13 @@

     service:
       name: cnpg-webhook-service
       namespace: database
       path: /validate-postgresql-cnpg-io-v1-scheduledbackup
       port: 443
   failurePolicy: Fail
-  name: vscheduledbackup.kb.io
+  name: vscheduledbackup.cnpg.io
   rules:
   - apiGroups:
     - postgresql.cnpg.io
     apiVersions:
     - v1
     operations:
@@ -77,13 +77,13 @@

     service:
       name: cnpg-webhook-service
       namespace: database
       path: /validate-postgresql-cnpg-io-v1-pooler
       port: 443
   failurePolicy: Fail
-  name: vpooler.kb.io
+  name: vpooler.cnpg.io
   rules:
   - apiGroups:
     - postgresql.cnpg.io
     apiVersions:
     - v1
     operations:

tyriis-automation[bot] (Contributor, Author):

--- kubernetes/talos-flux/apps/database/cloudnative-pg/app Kustomization: flux-system/apps-cloudnative-pg HelmRelease: database/cloudnative-pg

+++ kubernetes/talos-flux/apps/database/cloudnative-pg/app Kustomization: flux-system/apps-cloudnative-pg HelmRelease: database/cloudnative-pg

@@ -9,13 +9,13 @@

     spec:
       chart: cloudnative-pg
       sourceRef:
         kind: HelmRepository
         name: cloudnative-pg-charts
         namespace: flux-system
-      version: 0.18.2
+      version: 0.19.0
   install:
     createNamespace: true
     remediation:
       retries: 3
   interval: 15m
   maxHistory: 15

github-actions (Contributor):

🦙 MegaLinter status: ✅ SUCCESS

| Descriptor | Linter | Files | Fixed | Errors | Elapsed time |
|---|---|---|---|---|---|
| ✅ EDITORCONFIG | editorconfig-checker | 1 | | 0 | 0.01s |
| ✅ REPOSITORY | gitleaks | yes | | no | 2.62s |
| ✅ YAML | prettier | 1 | | 0 | 0.72s |
| ✅ YAML | yamllint | 1 | | 0 | 0.38s |

See detailed report in MegaLinter reports
Set VALIDATE_ALL_CODEBASE: true in mega-linter.yml to validate all sources, not only the diff

MegaLinter is graciously provided by OX Security

tyriis-automation[bot] merged commit 29f787c into main on Oct 19, 2023. 4 checks passed.
tyriis-automation[bot] deleted the renovate/cloudnative-pg-0.x branch on October 19, 2023 at 12:34.